The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
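The patch-based training that most respondents used to cope with large samples can be illustrated with a minimal NumPy sketch; the function name, patch sizes, and random cropping policy below are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def sample_patches(volume, patch_size, n_patches, rng=None):
    """Randomly crop fixed-size 3D patches from a volume (illustrative only)."""
    rng = rng or np.random.default_rng(0)
    D, H, W = volume.shape
    pd, ph, pw = patch_size
    patches = []
    for _ in range(n_patches):
        z = rng.integers(0, D - pd + 1)
        y = rng.integers(0, H - ph + 1)
        x = rng.integers(0, W - pw + 1)
        patches.append(volume[z:z + pd, y:y + ph, x:x + pw])
    return np.stack(patches)

vol = np.zeros((64, 128, 128), dtype=np.float32)  # a large 3D sample
batch = sample_patches(vol, (32, 64, 64), n_patches=4)
print(batch.shape)  # (4, 32, 64, 64)
```

Training on such fixed-size patches keeps memory bounded regardless of the original sample size, which is why it was the most common strategy reported (69%).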
The crossMoDA challenge aims to automatically segment the vestibular schwannoma (VS) tumor and cochlea regions of unlabeled high-resolution T2 scans by leveraging labeled contrast-enhanced T1 scans. The 2022 edition extends the segmentation task by including multi-institutional scans. In this work, we propose an unpaired cross-modality segmentation framework using data augmentation and hybrid convolutional networks. Considering the heterogeneous distributions and various image sizes of multi-institutional scans, we apply min-max normalization to scale the intensities of all scans to [-1, 1], and use voxel-size resampling and center cropping to obtain fixed-size sub-volumes for training. We adopt two data augmentation methods for effectively learning the semantic information and generating realistic target-domain scans: generative and online data augmentation. For generative data augmentation, we use CUT and CycleGAN to generate two groups of realistic T2 volumes with different details and appearances for supervised segmentation training. For online data augmentation, we design a random tumor signal reducing method to simulate the heterogeneity of VS tumor signals. Furthermore, we utilize an advanced hybrid convolutional network with multi-dimensional convolutions to adaptively learn sparse inter-slice information and dense intra-slice information for accurate volumetric segmentation of VS tumor and cochlea regions in anisotropic scans. On the crossMoDA 2022 validation dataset, our method produces promising results, achieving mean DSC values of 72.47% and 76.48% and ASSD values of 3.42 mm and 0.53 mm for the VS tumor and cochlea regions, respectively.
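The preprocessing described above (intensity scaling to [-1, 1] plus center cropping to a fixed sub-volume) can be sketched as follows; the function names and the example scan/crop sizes are illustrative assumptions, not the challenge pipeline's exact parameters.

```python
import numpy as np

def minmax_to_unit_range(scan):
    """Linearly scale intensities into [-1, 1] (min-max normalization)."""
    lo, hi = scan.min(), scan.max()
    return 2.0 * (scan - lo) / (hi - lo) - 1.0

def center_crop(scan, target):
    """Crop a fixed-size sub-volume around the scan centre."""
    starts = [(s - t) // 2 for s, t in zip(scan.shape, target)]
    return scan[tuple(slice(st, st + t) for st, t in zip(starts, target))]

scan = np.random.default_rng(0).normal(size=(40, 256, 256))
x = center_crop(minmax_to_unit_range(scan), (32, 192, 192))
print(x.shape)  # (32, 192, 192)
```

In practice voxel-size resampling would precede the crop so that all institutions' scans share one voxel spacing before a fixed-size sub-volume is extracted.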
Deep learning methods have contributed substantially to the rapid advancement of medical image segmentation, the quality of which relies on the suitable design of loss functions. Popular loss functions, including the cross-entropy and Dice losses, often fall short in boundary detection, thereby limiting high-resolution downstream applications such as automated diagnoses and procedures. We developed a novel loss function tailored to reflect boundary information and enhance boundary detection. As the contrast between segmentation and background regions along the classification boundary naturally induces heterogeneity over the pixels, we propose the piece-wise two-sample t-test augmented (PTA) loss, which is infused with a statistical test for such heterogeneity. We demonstrate the improved boundary detection power of the PTA loss compared to benchmark losses without a t-test component.
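The statistical building block of the PTA loss is the two-sample t-test for heterogeneity between pixel groups on either side of the boundary. A minimal NumPy sketch of that statistic follows; the exact piece-wise formulation and how it is combined with the base segmentation loss are defined in the paper, and the pixel values here are made up for illustration.

```python
import numpy as np

def two_sample_t(a, b):
    """Welch two-sample t-statistic between two groups of pixel intensities.
    Large |t| indicates strong heterogeneity across the boundary."""
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

# Hypothetical intensities just inside vs. just outside a predicted boundary.
fg = np.array([0.90, 0.80, 0.85, 0.95])
bg = np.array([0.10, 0.20, 0.15, 0.05])
t = two_sample_t(fg, bg)
```

A well-placed boundary separates two clearly heterogeneous pixel populations (large t); rewarding this in the loss is what sharpens boundary detection.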
In the real world, the class of a time series is usually labeled at the final time point, but many applications require classifying the time series at every time point. For example, the outcome of a critical patient is only determined at the end, yet the patient should be diagnosed at all times for timely treatment. We therefore propose a new concept: continuous classification of time series (CCT). It requires the model to learn the data at different time stages. But time series evolve dynamically, leading to different data distributions; when a model learns multiple distributions, it tends to either forget or overfit. We argue that a meaningful learning schedule is possible, based on an interesting observation: measured by confidence, the process of a model learning multiple distributions resembles the process of a human learning multiple pieces of knowledge. We thus propose a novel confidence-guided method for CCT (C3T). It imitates the alternating confidence of human learners described by the Dunning-Kruger effect. We define an objective confidence to arrange the data and a self-confidence to control the learning duration. Experiments on four real-world datasets show that C3T is more accurate than all baselines for CCT.
Active speaker detection plays a vital role in human-machine interaction. Recently, several end-to-end audio-visual frameworks have emerged. However, the inference time of these models has not been explored, and they are not applicable to real-time applications due to their complexity and large input sizes. Moreover, they adopt similar feature extraction strategies that employ ConvNets on both the audio and visual inputs. This work presents a novel two-stream end-to-end framework that fuses features extracted from images via VGG-M with raw Mel-frequency cepstral coefficients (MFCCs) extracted from the audio waveform. The network attaches two BiGRU layers to each stream to handle each stream's temporal dynamics before fusion; after fusion, one BiGRU layer is attached to model the joint temporal dynamics. Experimental results on the AVA-ActiveSpeaker dataset demonstrate that our new feature extraction strategy shows better robustness to noisy signals and faster inference time than models employing ConvNets on both modalities. The proposed model predicts within 44.41 ms, fast enough for real-time applications. Our best-performing model attains 88.929% accuracy, on par with state-of-the-art work.
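The two-stream layout (per-stream temporal encoding, concatenation, then joint temporal modeling) can be sketched at the shape level as follows. The `temporal_encoder` below is a stand-in linear projection, not a BiGRU, and the feature dimensions (512-d VGG-M features, 13 MFCCs) are assumptions for illustration.

```python
import numpy as np

def temporal_encoder(x, hidden):
    """Stand-in for the paper's BiGRU layers: a per-timestep linear
    projection, used here only to make the stream shapes concrete."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(x.shape[-1], hidden))
    return x @ W

T = 25                        # timesteps in a clip
visual = np.zeros((T, 512))   # VGG-M image features (assumed dimension)
audio = np.zeros((T, 13))     # raw MFCCs from the waveform

v = temporal_encoder(visual, 128)           # visual stream
a = temporal_encoder(audio, 128)            # audio stream
fused = np.concatenate([v, a], axis=-1)     # late fusion of the two streams
joint = temporal_encoder(fused, 128)        # joint temporal modeling
print(joint.shape)  # (25, 128)
```

Keeping the audio stream as raw MFCCs (instead of a second ConvNet) is what shrinks the input size and inference time in the proposed design.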
The gonadotropin-releasing hormone receptor (GnRH1R) is a promising therapeutic target for the treatment of uterine diseases. To date, several GnRH1R antagonists are available in clinical investigation, but none satisfies multiple property constraints. To fill this gap, we aimed to develop a learning-based framework to facilitate the effective and efficient discovery of new orally available small-molecule drugs targeting GnRH1R with desirable properties. In the present work, a ligand-and-structure-combined model for molecule generation, named LS-MolGen, was first proposed, fully utilizing the information of known active compounds and the structure of the target protein; its superior performance over separate ligand-based or structure-based methods was demonstrated. Then, in silico screening, including activity prediction, ADMET evaluation, molecular docking, and FEP calculations, was conducted, narrowing roughly 30,000 generated novel molecules down to 8 for experimental synthesis and validation. In vitro and in vivo experiments showed that three of them exhibited potent inhibitory activity against GnRH1R (compound 5, IC50 = 0.856 nM; compound 6, IC50 = 0.901 nM; compound 7, IC50 = 2.54 nM), and compound 5 performed well in basic PK properties such as half-life, oral bioavailability, and PPB. We believe that the proposed ligand-and-structure-combined molecule generation model and the whole computer-aided workflow could be extended to similar tasks in de novo drug design or lead optimization.
Self-supervised skeleton-based action recognition with contrastive learning has attracted much attention. Recent literature shows that data augmentation and large numbers of contrastive pairs are crucial for learning such representations. In this paper, we find that directly extending contrastive pairs based on normal augmentations brings limited gains, because the contribution of a contrastive pair from normal data augmentation to the loss becomes smaller as training progresses. We therefore delve into hard contrastive pairs for contrastive learning. Motivated by the success of mixing augmentation strategies, which improve performance on many tasks by synthesizing novel samples, we propose SkeleMixCLR: a contrastive learning framework with a spatio-temporal skeleton mixing augmentation (SkeleMix) that supplements the current contrastive samples with hard contrastive samples. First, SkeleMix utilizes the topological information of skeleton data to mix two skeleton sequences, randomly combining cropped skeleton fragments (the trimmed view) with the remaining skeleton sequence (the truncated view). Second, spatio-temporal mask pooling is applied to separate these two views at the feature level. Third, we extend the contrastive pairs with these two views. SkeleMixCLR leverages the trimmed and truncated views to provide abundant hard contrastive pairs: because the two views involve some context information from each other due to the graph convolution operations, the model is pushed to learn better motion representations for action recognition. Extensive experiments on the NTU-RGB+D, NTU120-RGB+D, and PKU-MMD datasets show that SkeleMixCLR achieves state-of-the-art performance. Code is available at https://github.com/czhaneva/skelemixclr.
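The SkeleMix operation (pasting a cropped spatio-temporal fragment of one skeleton sequence into another, and remembering which region was pasted so mask pooling can separate the views later) can be sketched as below. The topology-aware joint selection is simplified here to an explicit joint index list, and the array sizes are illustrative assumptions.

```python
import numpy as np

def skelemix(seq_a, seq_b, joint_idx, t0, t1):
    """Paste the joints `joint_idx` of seq_a over frames [t0, t1) of seq_b.
    The pasted region is the trimmed view; the rest of seq_b is the
    truncated view. The boolean mask is kept for feature-level mask pooling."""
    mixed = seq_b.copy()
    mask = np.zeros(seq_b.shape[:2], dtype=bool)
    mixed[t0:t1, joint_idx] = seq_a[t0:t1, joint_idx]
    mask[t0:t1, joint_idx] = True
    return mixed, mask

a = np.ones((64, 25, 3))    # (frames, joints, xyz) — sequence A
b = np.zeros((64, 25, 3))   # sequence B
mixed, mask = skelemix(a, b, joint_idx=[4, 5, 6], t0=10, t1=30)
```

Because graph convolutions propagate information across joints and frames, the trimmed and truncated views each leak some of the other's context, which is exactly what makes them hard contrastive samples.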
The complex underwater environment brings new challenges to object detection, such as unbalanced light conditions, low contrast, occlusion, and mimicry by aquatic organisms. Under these circumstances, objects captured by an underwater camera become blurred, and generic detectors often fail on such blurred objects. This work aims to solve the problem from two perspectives: uncertainty modeling and hard example mining. We propose a two-stage underwater detector named Boosting R-CNN, which comprises three key components. First, a new region proposal network named RetinaRPN is proposed, which provides high-quality proposals and considers objectness and IoU prediction to model the uncertainty of the object prior probability. Second, a probabilistic inference pipeline is introduced to combine the first-stage prior uncertainty with the second-stage classification score to model the final detection score. Finally, we propose a new hard example mining method named boosting reweighting. Specifically, when the region proposal network misestimates the object prior probability of a sample, boosting reweighting increases the classification loss of that sample for the R-CNN head during training, while reducing the loss of easy samples whose priors are accurately estimated. A robust detection head can thus be obtained in the second stage. At inference time, the R-CNN head is able to rectify errors of the first stage to improve performance. Comprehensive experiments on two underwater datasets and two generic object detection datasets demonstrate the effectiveness and robustness of our method.
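The boosting reweighting idea (scale each sample's second-stage classification loss by how badly the RPN's prior was estimated) can be sketched as below; the linear weighting `1 + |error|` is an assumed stand-in for the paper's exact reweighting function.

```python
import numpy as np

def boosting_reweight(cls_loss, prior_prob, is_object):
    """Up-weight R-CNN classification loss where the RPN prior objectness
    was wrong; keep easy samples with accurate priors near weight 1.
    The linear form of the weight is an illustrative assumption."""
    target = is_object.astype(float)
    prior_error = np.abs(prior_prob - target)  # 0 = accurate prior, 1 = wrong
    weights = 1.0 + prior_error
    return weights * cls_loss

loss = np.array([0.5, 0.5])     # equal raw classification losses
prior = np.array([0.1, 0.9])    # RPN prior objectness for two positives
obj = np.array([True, True])
reweighted = boosting_reweight(loss, prior, obj)
# the sample the RPN got wrong (prior 0.1) receives the larger loss
```

This mirrors boosting in ensemble learning: the second stage concentrates its capacity on exactly the samples the first stage misjudged.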
Previous work generally holds that improving the spatial invariance of convolutional networks is the key to object counting. However, after examining several mainstream counting networks, we surprisingly find that overly strict pixel-level spatial invariance causes excessive noise in density map generation. In this paper, we replace the original convolution filters with locally connected Gaussian kernels to estimate the spatial positions in the density map. The aim is to allow the feature extraction process to potentially stimulate the density map generation process so as to overcome annotation noise. Inspired by previous work, we propose a low-rank approximation accompanied by translation invariance to efficiently approximate the massive Gaussian convolutions. Our work points to a new direction for follow-up research, which should investigate how to properly relax the overly strict pixel-level spatial invariance for object counting. We evaluate our method on 4 mainstream object counting networks (i.e., MCNN, CSRNet, SANet, and ResNet-50). Extensive experiments were conducted on 7 popular benchmarks covering 3 applications (i.e., crowd, vehicle, and plant counting). The results show that our method significantly outperforms other state-of-the-art methods and achieves promising learning of the spatial positions of objects.
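For context, the density maps being generated here follow the standard counting convention: a 2D Gaussian is placed at each annotated object so that the map integrates to the object count. A minimal fixed-sigma sketch is below; the paper's contribution is to learn locally connected, per-position Gaussians rather than use this fixed baseline.

```python
import numpy as np

def gaussian_density_map(points, shape, sigma=4.0):
    """Place one 2D Gaussian per annotated point; each Gaussian is
    normalized so the map's integral equals the object count."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    dmap = np.zeros(shape)
    for (y, x) in points:
        g = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()   # each object contributes exactly 1 to the sum
    return dmap

dmap = gaussian_density_map([(20, 30), (50, 50)], (64, 64))
print(round(dmap.sum(), 4))  # 2.0 — two annotated objects
```

Because annotation points are noisy, forcing the network to reproduce such maps with pixel-level exactness propagates that noise, which motivates relaxing the strict spatial invariance.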
Prediction tasks in many real-world applications require modeling multi-order feature interactions within a user's event sequence to achieve better detection performance. However, existing popular solutions usually suffer from two key issues: 1) they focus only on feature interactions and fail to capture the influence of the sequence; or 2) they focus only on sequence information but ignore the internal feature relations of each event, and thus fail to extract good event representations. In this paper, we consider a two-level structure to capture the hierarchical information in a user's event sequence: 1) learning effective feature interactions to build event representations; and 2) modeling the sequence representation of the user's historical events. Experimental results on industrial and public datasets clearly show that our model achieves better performance than state-of-the-art baselines.
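The two-level structure can be sketched as below: level 1 builds an event representation from within-event feature interactions (an FM-style pairwise interaction term is used here as an assumed stand-in), and level 2 aggregates event representations over the user's history (a mean stands in for the sequence model). All names and dimensions are illustrative.

```python
import numpy as np

def event_repr(event_feats, W):
    """Level 1: FM-style second-order interactions among one event's features."""
    emb = event_feats @ W                                  # embed each feature
    s = emb.sum(axis=0)
    return 0.5 * (s * s - (emb * emb).sum(axis=0))         # pairwise term

def sequence_repr(event_reprs):
    """Level 2: aggregate event representations over the user's history."""
    return np.mean(event_reprs, axis=0)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))                                # shared embedding
events = [rng.normal(size=(5, 8)) for _ in range(3)]       # 3 events, 5 features
user_vec = sequence_repr(np.stack([event_repr(e, W) for e in events]))
print(user_vec.shape)  # (4,)
```

Separating the two levels lets each component specialize: the interaction layer captures within-event structure, while the sequence layer captures how events influence each other over time.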